UK warns businesses to address cyber risks amid Anthropic AI panic
The British government warned businesses in an open letter Wednesday to strengthen their cyber defenses, reinforcing longstanding guidance as concern grows over how advances in artificial intelligence could reshape the threat landscape.
The warning follows heightened concern after AI company Anthropic last week unveiled its latest model, Mythos, which it said demonstrated advanced capabilities that could accelerate how quickly software vulnerabilities are automatically discovered and exploited.
The announcement has triggered concern beyond the tech sector, with the governor of the Bank of England, Andrew Bailey, warning of potential systemic cyber risks and security experts debating how quickly such capabilities could translate into real-world attacks.
Some researchers, including security technologist Bruce Schneier, cautioned that early claims risk overstating the immediate threat of Mythos, noting that similar warnings have accompanied previous generations of AI-enabled tools. Even so, they said the steady spread of increasingly accessible hacking capabilities should prompt organizations to reassess their defenses.
The British government’s letter cited an evaluation by the U.K.’s AI Security Institute (AISI), which found Mythos was “more capable at cyber offence than any model we have previously assessed,” but also stressed significant limitations that complicate comparisons with real-world threats.
Experts say AI is already increasing the speed and scale at which vulnerabilities can be identified and exploited, raising the stakes for companies across all sectors.
In a companion blog post to the open letter, Richard Horne, chief executive of the National Cyber Security Centre, said “we will increasingly see AI exposing those organisations that have not taken appropriate steps to safeguard their cyber security.”
However, Horne said that while AI-enabled tools could improve attackers’ ability to find and exploit weaknesses, defenders who adopt the technology effectively could also strengthen detection and response.
“Frontier AI models’ capabilities to find vulnerabilities in code can ultimately be a good thing for our cyber security,” he said.
In its report, the AISI described Mythos as “at least capable of autonomously attacking small, weakly defended and vulnerable enterprise systems where access to a network has been gained.”
But it emphasized that its test environments were deliberately simplified and easier to compromise than real-world systems, lacking active security teams, monitoring tools and the risk of detection — factors that typically constrain attackers.
The institute said this made it difficult to assess how such systems would perform against well-defended networks, tempering some of the more dramatic interpretations of the model’s capabilities.
More advanced forms of AI-enabled cyber operations may already be emerging in state-backed programs. Earlier this year, reporting by Recorded Future News detailed leaked Chinese technical documents outlining efforts to build AI systems capable of navigating defended networks while avoiding detection.
The documents suggest a focus not just on finding vulnerabilities, but on sustaining covert access — a more complex challenge than those tested in controlled AI evaluations.
Ciaran Martin, a professor at the University of Oxford’s Blavatnik School of Government, said the institute’s evaluation of Mythos helps bring greater realism to debate around AI-driven cyber risks.
“The AISI report has brought much needed rigorous realism to the frenzy. It shows that the hacking capabilities of AI are speeding up even more rapidly than previously thought,” Martin said.
“On the other hand the AISI are clear about the limitations of testing: in effect they say Mythos is like a striker in football who is brilliant at scoring goals against teams with no goalkeeper but is so far untested against the likes of [Italian keeper] Gianluigi Donnarumma. Yet again it comes back to the need for urgent attention being given to upgrading defences in the transfer window Anthropic and others are giving us.”
The government’s letter repeats previous calls for executives to take direct responsibility for cyber resilience, stressing that leadership engagement is critical. It urges companies to invest in basic security measures, understand their exposure to cyber risks and ensure they can respond to and recover from incidents.
Much of the message echoes earlier guidance for companies to adopt baseline cyber security practices, including patching systems, monitoring networks and preparing incident response plans — measures officials have warned for years remain unevenly implemented across industry.
Alexander Martin
is the UK Editor for Recorded Future News. He was previously a technology reporter for Sky News and a fellow at the European Cyber Conflict Research Initiative, now Virtual Routes. He can be reached securely using Signal on: AlexanderMartin.79